27 research outputs found

    Self-Similar Anisotropic Texture Analysis: the Hyperbolic Wavelet Transform Contribution

    Full text link
    Textures in images can often be well modeled by self-similar processes, while at the same time displaying anisotropy. The present contribution therefore aims at jointly studying self-similarity and anisotropy, focusing on a specific classical class of Gaussian anisotropic self-similar processes. It is first shown that accurate joint estimates of the anisotropy and self-similarity parameters are obtained by replacing the standard 2D discrete wavelet transform with the hyperbolic wavelet transform, which permits the use of different dilation factors along the horizontal and vertical axes. Defining anisotropy requires a reference direction that need not a priori match the horizontal and vertical axes along which the images are digitized; this discrepancy defines a rotation angle. Second, we show that this rotation angle can be jointly estimated. Third, a nonparametric bootstrap-based procedure is described that provides confidence intervals in addition to the estimates themselves and enables the construction of an isotropy test that can be applied to a single texture image. Fourth, the robustness and versatility of the proposed analysis are illustrated by applying it to a large variety of isotropic and anisotropic self-similar fields. As an illustration, we show that genuinely anisotropic self-similarity can be disentangled from isotropic self-similarity on which an anisotropic trend has been superimposed.
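    The estimation principle in this abstract lends itself to a compact numerical sketch. The following Python fragment is an illustration, not the authors' implementation: it computes tensor-product (hyperbolic) Haar coefficients of an image, in which rows and columns are filtered to independent depths j1 and j2, and the log2 structure function on which a joint regression would be based. The Haar filter and the white-noise test field are assumptions made for brevity.

```python
import numpy as np

def haar_step(a, axis):
    """One Haar analysis step along `axis`: returns (approximation, detail)."""
    a = np.swapaxes(a, 0, axis)
    a = a[: (a.shape[0] // 2) * 2]                 # drop an odd trailing sample
    approx = (a[0::2] + a[1::2]) / np.sqrt(2)
    detail = (a[0::2] - a[1::2]) / np.sqrt(2)
    return np.swapaxes(approx, 0, axis), np.swapaxes(detail, 0, axis)

def hyperbolic_structure_function(img, j1_max, j2_max):
    """log2 of the mean squared hyperbolic wavelet coefficients at (j1, j2).

    Unlike the square 2D DWT, the two axes are dilated independently,
    which is the defining feature of the hyperbolic transform.
    """
    S = np.empty((j1_max, j2_max))
    row_approx = np.asarray(img, float)
    for j1 in range(1, j1_max + 1):
        row_approx, row_detail = haar_step(row_approx, axis=0)
        col_approx = row_detail
        for j2 in range(1, j2_max + 1):
            col_approx, d = haar_step(col_approx, axis=1)
            S[j1 - 1, j2 - 1] = np.log2(np.mean(d ** 2))
    return S

# Stand-in field; a simulated self-similar texture would replace this in practice.
field = np.random.default_rng(0).standard_normal((1024, 1024))
S = hyperbolic_structure_function(field, j1_max=5, j2_max=5)
```

    The anisotropy and self-similarity parameters would then be read off a linear regression of S against j1 and j2 over a suitable range of scales, as the abstract describes.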

    Wavelet techniques for pointwise anti-Hölderian irregularity

    Full text link
    In this paper, we introduce a notion of weak pointwise Hölder regularity, starting from the definition of pointwise anti-Hölder irregularity. Using this concept, a weak spectrum of singularities can be defined, as for the usual pointwise Hölder regularity. We build a class of wavelet series satisfying the multifractal formalism and thus show the optimality of the upper bound. We also show that the weak spectrum of singularities is disconnected from the usual one (denoted here the strong spectrum of singularities) by exhibiting a multifractal function made of Davenport series whose weak spectrum differs from the strong one.
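    For readers who want the objects pinned down, the standard notions the abstract builds on can be stated as follows. These are the classical formulations; the paper's precise definitions of weak regularity and anti-Hölder irregularity may differ in detail.

```latex
% Pointwise Hölder regularity: f \in C^{\alpha}(x_0) iff there exist
% C > 0 and a polynomial P with \deg P < \alpha such that, near x_0,
\lvert f(x) - P(x - x_0) \rvert \le C \,\lvert x - x_0 \rvert^{\alpha}.
% Anti-Hölder irregularity reverses the bound at every small scale:
% for some c > 0 and all sufficiently small r > 0,
\sup_{\lvert x - x_0 \rvert \le r} \lvert f(x) - P(x - x_0) \rvert \ge c\, r^{\alpha}.
% The (strong) spectrum of singularities is the map
d(h) = \dim_{\mathrm H}\{ x : h_f(x) = h \},
% where h_f(x) is the supremum of the exponents \alpha admissible at x.
```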

    On the dimension of graphs of Weierstrass-type functions with rapidly growing frequencies

    Full text link
    We determine the Hausdorff and box dimensions of the fractal graphs for a general class of Weierstrass-type functions of the form $f(x) = \sum_{n=1}^{\infty} a_n \, g(b_n x + \theta_n)$, where $g$ is a periodic Lipschitz real function and $a_{n+1}/a_n \to 0$, $b_{n+1}/b_n \to \infty$ as $n \to \infty$. Moreover, for any $H, B \in [1, 2]$ with $H \leq B$, we provide examples of such functions with $\dim_H(\mathrm{graph}\, f) = \underline{\dim}_B(\mathrm{graph}\, f) = H$ and $\overline{\dim}_B(\mathrm{graph}\, f) = B$.
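    The growth conditions on $a_n$ and $b_n$ are easy to satisfy, and the sketch below evaluates a truncation of one admissible example. The particular choices $a_n = 2^{-n^2}$, $b_n = 2^{n^2}$ and $g = \cos$ are illustrative assumptions, not taken from the paper (cosine is periodic and Lipschitz, as the class requires).

```python
import numpy as np

def weierstrass_type(x, n_terms=5, g=np.cos, thetas=None):
    """Truncation of f(x) = sum_n a_n * g(b_n * x + theta_n).

    With a_n = 2**(-n*n) and b_n = 2**(n*n), the ratios satisfy
    a_{n+1}/a_n -> 0 and b_{n+1}/b_n -> infinity, as required.
    """
    thetas = np.zeros(n_terms) if thetas is None else thetas
    f = np.zeros_like(x, dtype=float)
    for n in range(1, n_terms + 1):
        a_n, b_n = 2.0 ** (-n * n), 2.0 ** (n * n)
        f += a_n * g(b_n * x + thetas[n - 1])
    return f

x = np.linspace(0.0, 1.0, 1 << 16)
y = weierstrass_type(x)   # fine-scale oscillations carry the graph's dimension
```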

    Large Scale Reduction Principle and Application to Hypothesis Testing

    No full text
    Consider a non-linear function G(X_t), where X_t is a stationary Gaussian sequence with long-range dependence. The usual reduction principle states that the partial sums of G(X_t) behave asymptotically like the partial sums of the first term in the expansion of G in Hermite polynomials. In the context of wavelet estimation of the long-range dependence parameter, one replaces the partial sums of G(X_t) by the wavelet scalogram, namely the partial sum of squares of the wavelet coefficients. Is there a reduction principle in the wavelet setting, i.e., is the asymptotic behavior of the scalogram for G(X_t) the same as that of the first term in the expansion of G in Hermite polynomials? The answer is negative in general. This paper provides a minimal growth condition on the wavelet scales which ensures that the reduction principle also holds for the scalogram. The results are applied to testing the hypothesis that the long-range dependence parameter takes a specific value.
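    The objects in this abstract are straightforward to reproduce numerically. The sketch below is an illustration under stated assumptions, not the paper's estimator: it simulates long-range dependent Gaussian noise by circulant embedding, applies G(x) = x^2 - 1 (the second Hermite polynomial, one natural choice of non-linear G), and computes a Haar wavelet scalogram across scales.

```python
import numpy as np

rng = np.random.default_rng(0)

def fgn(n, H):
    """Fractional Gaussian noise via circulant embedding (Davies-Harte)."""
    k = np.arange(n)
    r = 0.5 * ((k + 1.0) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1.0) ** (2 * H))
    c = np.concatenate([r, r[-2:0:-1]])            # first row of the circulant
    lam = np.clip(np.fft.fft(c).real, 0.0, None)   # guard rounding noise
    m = c.size
    xi = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    return np.fft.fft(np.sqrt(lam) * xi).real[:n] / np.sqrt(m)

def haar_scalogram(x, j_max):
    """Mean squared Haar wavelet coefficients at scales j = 1 .. j_max."""
    out, approx = [], np.asarray(x, float)
    for _ in range(j_max):
        approx = approx[: approx.size // 2 * 2]
        detail = (approx[0::2] - approx[1::2]) / np.sqrt(2)
        approx = (approx[0::2] + approx[1::2]) / np.sqrt(2)
        out.append(np.mean(detail ** 2))
    return np.array(out)

X = fgn(2 ** 16, H=0.9)               # long-range dependent Gaussian input
S = haar_scalogram(X ** 2 - 1, 10)    # scalogram of G(X_t), Hermite rank 2
j = np.arange(1, 11)
slope = np.polyfit(j, np.log2(S), 1)[0]   # scaling exponent by log-regression
```

    The abstract's point is precisely that, without a growth condition on the scales entering the regression, the slope obtained for G(X_t) need not match the one predicted by the leading Hermite term.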

    Decision tree for uncertainty measures

    No full text
    Ensemble methods are popular machine learning techniques that are powerful for both classification and prediction problems. A set of classifiers (regression or classification trees) is constructed, and the classification or prediction of a new data instance is obtained by taking a weighted vote. A tree is a piecewise constant estimator on partitions obtained from the data. These partitions are induced by recursive dyadic splits of the set of input variables. For example, CART (Classification And Regression Trees) [1] is an efficient algorithm for the construction of a tree. The goal is to partition the space of input variable values into K disjoint regions that are as homogeneous as possible. More precisely, each partitioning value has to minimize a risk function. In practice, however, experimental measurements can be observed with uncertainty. This work proposes to extend the CART algorithm to this kind of data. We present an induced model adapted to uncertain data, together with prediction and split rules for tree construction that take into account the uncertainty of each quantitative observation in the database.
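    The paper's exact split rule is not reproduced here, but the idea of folding per-observation uncertainty into a CART-style criterion can be sketched. Below, each response y_i carries a standard deviation s_i, and candidate splits minimize a weighted within-node sum of squares with inverse-variance weights; the weighting scheme and function names are assumptions for illustration.

```python
import numpy as np

def weighted_sse(y, w):
    """Within-node risk: weighted squared error around the weighted mean,
    so highly uncertain observations pull the split less."""
    mu = np.average(y, weights=w)
    return np.sum(w * (y - mu) ** 2)

def best_split(x, y, s):
    """Search one feature for the CART-style threshold minimizing the risk."""
    w = 1.0 / (s ** 2 + 1e-12)             # inverse-variance weights
    order = np.argsort(x)
    x, y, w = x[order], y[order], w[order]
    best_risk, best_threshold = np.inf, None
    for i in range(1, len(x)):
        if x[i] == x[i - 1]:
            continue                       # no valid threshold between ties
        risk = weighted_sse(y[:i], w[:i]) + weighted_sse(y[i:], w[i:])
        if risk < best_risk:
            best_risk, best_threshold = risk, 0.5 * (x[i] + x[i - 1])
    return best_risk, best_threshold
```

    Growing a full tree would apply this search recursively over all input variables, as in standard CART, with the prediction at each leaf given by the weighted mean of the responses it contains.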